How to learn an effective reinforcement-learning-based model for control tasks from high-dimensional visual observations is a practical and challenging problem. A key to solving this problem is to learn low-dimensional state representations from observations, from which an effective policy can be learned. To boost the learning of state encodings, recent works focus on capturing behavioral similarities between state representations or applying data augmentation to visual observations. In this paper, we propose a novel meta-learner-based framework for representation learning regarding behavioral similarities for reinforcement learning. Specifically, our framework encodes the high-dimensional observations into two decomposed embeddings regarding reward and dynamics in a Markov Decision Process (MDP). A pair of meta-learners is developed, one quantifying reward similarity and the other quantifying dynamics similarity over the correspondingly decomposed embeddings. The meta-learners are self-learned to update the state embeddings by approximating two disjoint terms in the on-policy bisimulation metric. To incorporate the reward and dynamics terms, we further develop a strategy to adaptively balance their impacts across different tasks or environments. We empirically demonstrate that our proposed framework outperforms state-of-the-art baselines on several benchmarks, including the conventional DM Control Suite, the Distracting DM Control Suite, and the self-driving task CARLA.
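The two terms the abstract refers to come from the bisimulation metric, which bounds the distance between two states by a reward difference plus a discounted distance between their next-state distributions. As a minimal generic sketch (not the paper's meta-learner), assuming diagonal-Gaussian next-state distributions so the 2-Wasserstein dynamics term has a closed form:

```python
import numpy as np

def bisimulation_distance(r_i, r_j, mu_i, sigma_i, mu_j, sigma_j, gamma=0.99):
    """Illustrative on-policy bisimulation distance between two states.

    reward term:   |r_i - r_j|
    dynamics term: closed-form 2-Wasserstein distance between diagonal
                   Gaussian next-state distributions N(mu, diag(sigma^2)).
    """
    reward_term = abs(r_i - r_j)
    # W2 between diagonal Gaussians: sqrt(||mu_i - mu_j||^2 + ||sigma_i - sigma_j||^2)
    dynamics_term = np.sqrt(np.sum((mu_i - mu_j) ** 2) + np.sum((sigma_i - sigma_j) ** 2))
    return reward_term + gamma * dynamics_term
```

In the framework above, each of these two terms is approximated by a separate meta-learner over its own embedding, rather than computed in closed form as here.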
Learning to optimize is a rapidly growing area that aims to use machine learning (ML) to solve optimization problems or to improve existing optimization algorithms. In particular, graph neural networks (GNNs) are regarded as suitable ML models for optimization problems whose variables and constraints are permutation-invariant, for example, linear programs (LPs). While the literature has reported encouraging numerical results, this paper establishes the theoretical foundation for applying GNNs to solve LPs. Given any size limit on LPs, we construct a GNN that maps different LPs to different outputs. We show that properly built GNNs can reliably predict feasibility, boundedness, and an optimal solution for each LP in a broad class. Our proofs are based on the recently discovered connection between the Weisfeiler-Lehman isomorphism test and GNNs. To validate our results, we train a simple GNN and report its accuracy in mapping LPs to their feasibility and solutions.
Video object segmentation (VOS) is fundamental to video understanding. Transformer-based methods have shown significant performance improvements on semi-supervised VOS. However, existing work faces challenges in segmenting visually similar objects in close proximity to each other. In this paper, we propose a novel Bilateral Attention Transformer in Motion-Appearance Neighboring space (BATMAN) for semi-supervised VOS. It captures object motion in the video via a novel optical-flow calibration module that fuses the segmentation mask with optical-flow estimation to improve within-object flow smoothness and reduce noise at object boundaries. This calibrated optical flow is then employed in our novel bilateral attention, which computes the correspondence between the query and reference frames in the neighboring bilateral space, considering both motion and appearance. Extensive experiments validate the effectiveness of BATMAN, which outperforms all existing state-of-the-art methods on all four popular VOS benchmarks: YouTube-VOS 2019 (85.0%), YouTube-VOS 2018 (85.3%), DAVIS 2017 val/test-dev (86.2%/82.2%), and DAVIS 2016 (92.5%).
This paper develops rotation-invariant needlet convolutions for the rotation group SO(3), which can distill multiscale information from spherical signals. The spherical needlet transform is generalized from the sphere $\mathbb{S}^2$ onto the SO(3) group, decomposing a spherical signal into approximation and detail spectral coefficients via a set of tight framelet operators. The decomposition and reconstruction of spherical signals achieve rotation invariance. Based on the needlet transform, we form a Needlet approximate Equivariance Spherical CNN (NES) with multiple SO(3) needlet convolutional layers. The network establishes a powerful tool for extracting geometrically invariant features of spherical signals. The model allows sufficient network scalability with multiresolution representations. A robust signal embedding is learned with a wavelet-shrinkage activation function, which filters out redundant high-pass representations while maintaining approximate rotation invariance. NES achieves state-of-the-art performance on quantum chemistry regression and cosmic microwave background (CMB) delensing reconstruction, showing great potential for solving scientific challenges with high-resolution and multiscale spherical-signal representations.
Collecting sufficient labeled data to build human activity recognition (HAR) models is costly and time-consuming. Training on existing data often biases a model toward the distribution of the training data, so the model may perform poorly on test data with a different distribution. Although existing efforts on transfer learning and domain adaptation try to solve the above problem, they still require access to unlabeled data in the target domain, which may be infeasible in real scenarios. Few works pay attention to training a model that generalizes well to unseen target domains for HAR. In this paper, we propose a novel method called Semantic-Discriminative Mixup (SDMix) for generalizable cross-domain HAR. First, we introduce semantic-aware mixup, which takes the semantic ranges of activities into account to overcome the semantic inconsistency brought by domain differences. Second, we introduce a large-margin loss to enhance the discrimination of the mixup, preventing misclassification brought by noisy virtual labels. Comprehensive generalization experiments on five public datasets demonstrate that our SDMix substantially outperforms state-of-the-art methods, with an average accuracy improvement of 6% on cross-person, cross-dataset, and cross-position HAR.
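The semantic-aware mixup above builds on the standard mixup operation, which interpolates both inputs and labels; the paper's contribution is constraining this interpolation by activity semantics. A minimal sketch of plain mixup (not the semantic-aware variant; the beta-distribution mixing is the usual convention, assumed here):

```python
import numpy as np

def mixup(x_i, y_i, x_j, y_j, lam=None, alpha=0.2, rng=None):
    """Standard mixup: convex combination of two samples and their one-hot
    labels. The mixing weight lam may be given explicitly or drawn from
    Beta(alpha, alpha), as is conventional."""
    if lam is None:
        rng = rng or np.random.default_rng()
        lam = rng.beta(alpha, alpha)
    x = lam * x_i + (1.0 - lam) * x_j
    y = lam * y_i + (1.0 - lam) * y_j  # soft "virtual" label
    return x, y
```

The "virtual labels" mentioned in the abstract are exactly these soft interpolated labels `y`; the large-margin loss is introduced because such labels can be noisy near class boundaries.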
We study the online influence maximization (OIM) problem in social networks, where in multiple rounds the learner repeatedly chooses seed nodes to generate cascades, observes the cascade feedback, and gradually learns the best seeds that generate the largest cascade. We focus on two major challenges in this paper. First, we work with node-level feedback instead of edge-level feedback. Edge-level feedback reveals all edges that pass information through the cascade, whereas node-level feedback only reveals the activated nodes with timestamps. Node-level feedback is arguably more realistic since in practice it is relatively easy to observe who is influenced but very difficult to observe from which relationship (edge) the influence comes. Second, we use a standard offline oracle instead of an offline pair-oracle. To compute a good seed set for the next round, an offline pair-oracle finds the best seed set and the best parameters within the confidence region simultaneously, and such an oracle is difficult to compute due to the combinatorial core of the OIM problem. So we focus on how to use the standard offline influence maximization oracle, which finds the best seed set given the edge parameters as input. In this paper, we resolve these challenges for the two most popular diffusion models, the independent cascade (IC) and the linear threshold (LT) models. For the IC model, past research only achieves edge-level feedback, while we present the first $\widetilde{O}(\sqrt{T})$-regret algorithm with node-level feedback. Moreover, the algorithm invokes only standard offline oracles. For the LT model, a recent study provides an OIM solution that addresses the first challenge but still requires a pair-oracle. In this paper, we apply a technique similar to the one used for the IC model to replace the pair-oracle with a standard oracle while maintaining $\widetilde{O}(\sqrt{T})$-regret.
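For readers unfamiliar with the IC model mentioned above: a cascade starts from the seed set, and each newly activated node gets one chance to activate each inactive neighbor independently, with the corresponding edge probability. A minimal Monte-Carlo simulation sketch (the graph encoding is an assumption for illustration, not the paper's setup):

```python
import random

def simulate_ic(graph, seeds, rng=None):
    """One Monte-Carlo run of the Independent Cascade (IC) model.

    graph: dict mapping node -> list of (neighbor, activation_probability)
    seeds: iterable of initially active nodes
    Returns the set of all activated nodes (node-level outcome; which edges
    fired -- the edge-level feedback -- is not part of the return value).
    """
    rng = rng or random.Random()
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v, p in graph.get(u, []):
                # each edge gets exactly one independent activation attempt
                if v not in active and rng.random() < p:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active
```

Note that the return value deliberately mirrors node-level feedback: the learner sees which nodes were activated (and, in the paper's setting, when), but not which edges carried the influence.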
Adversarial attacks on object detection are feasible in the real world. However, most previous works have tried to learn local "patches" applied to an object to fool detectors, which become less effective at oblique viewing angles. To address this issue, we propose the Dense Proposal Attack (DPA) to learn one-piece, physical, and targeted adversarial camouflages for detectors. The camouflages are one-piece because they are generated as a whole for an object, physical because they remain adversarial when photographed under arbitrary viewpoints and different illumination conditions, and targeted because they can cause detectors to misidentify an object as a specific target class. To make the generated camouflages robust in the physical world, we introduce a combination of transformations to simulate physical phenomena. In addition, to improve the attack, DPA simultaneously attacks all classifications in the fixed dense proposals. Moreover, we build virtual 3D scenes using the Unity simulation engine to fairly and reproducibly evaluate different physical attacks. Extensive experiments demonstrate that DPA outperforms state-of-the-art methods; it is generic for any object and generalizes well to the real world, posing a potential threat to security-critical computer vision systems.
Detecting personal health mentions on social media is essential to complement existing health surveillance systems. However, annotating data for detecting health mentions at a large scale is a challenging task. This research employs a multitask learning framework to leverage available annotated data from a related task to improve performance on the main task of detecting personal health experiences mentioned in social media texts. Specifically, we focus on incorporating emotional information into our target task by using emotion detection as an auxiliary task. Our approach significantly improves a wide range of personal health mention detection tasks compared to a strong state-of-the-art baseline.
The health mention classification (HMC) task is the process of identifying and classifying mentions of health-related concepts in text. This can be useful for identifying and tracking the spread of diseases through social media posts. However, this is a non-trivial task. Here we build on recent studies suggesting that using emotional information may improve upon this task. Our study results in a framework for health mention classification that incorporates affective features. We present two methods, an intermediate task fine-tuning approach (implicit) and a multi-feature fusion approach (explicit) to incorporate emotions into our target task of HMC. We evaluated our approach on 5 HMC-related datasets from different social media platforms including three from Twitter, one from Reddit and another from a combination of social media sources. Extensive experiments demonstrate that our approach results in statistically significant performance gains on HMC tasks. By using the multi-feature fusion approach, we achieve at least a 3% improvement in F1 score over BERT baselines across all datasets. We also show that considering only negative emotions does not significantly affect performance on the HMC task. Additionally, our results indicate that HMC models infused with emotional knowledge are an effective alternative, especially when other HMC datasets are unavailable for domain-specific fine-tuning. The source code for our models is freely available at https://github.com/tahirlanre/Emotion_PHM.
In this paper we revisit endless online level generation with the recently proposed experience-driven procedural content generation via reinforcement learning (EDRL) framework, based on the observation that EDRL tends to generate recurrent patterns. Inspired by this phenomenon, we formulate a notion of state space closure, which means that any state that may appear in an infinite-horizon online generation process can be found within a finite horizon. Through theoretical analysis we find that although state space closure raises a concern about diversity, it allows an EDRL model trained on a finite horizon to generalise to the infinite-horizon scenario without deterioration of content quality. Moreover, we verify the quality and diversity of content generated by EDRL via empirical studies on the widely used Super Mario Bros. benchmark. Experimental results reveal that the current EDRL approach's ability to generate diverse game levels is limited due to state space closure, whereas it does not suffer from reward deterioration given a horizon longer than the training horizon. Concluding our findings and analysis, we argue that future work on generating diverse and high-quality content online via EDRL should address the issue of diversity on the premise of state space closure, which ensures quality.